
    Evidence from Patents and Patent Citations on the Impact of NASA and Other Federal Labs on Commercial Innovation

    Get PDF
    We explore the commercialization of government-generated technology by analyzing patents awarded to the U.S. government and the citations to those patents from subsequent patents. We use information on citations to federal patents in two ways: (1) to compare the average technological impact of NASA patents, other federal patents, and a random sample of all patents using measures of 'importance' and 'generality'; and (2) to trace the geographic location of commercial development by focusing on the location of inventors who cite NASA and other federal patents. We find, first, that the evidence is consistent with increased effort to commercialize federal lab technology generally and NASA technology specifically. The data reveal a striking NASA 'golden age' during the second half of the 1970s, which remains a puzzle. Second, spillovers are concentrated within a federal lab complex of states representing agglomerations of labs and companies. The technology complex links five NASA states through patent citations: California, Texas, Ohio, DC/Virginia-Maryland, and Alabama. Third, qualitative evidence provides some support for the use of patent citations as proxies for both technological impact and knowledge spillovers.
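    The 'generality' measure referenced above is not defined in this abstract; in the patent-citation literature (following Trajtenberg, Henderson and Jaffe) it is conventionally computed as one minus the Herfindahl concentration of the technology classes of the citing patents. A minimal sketch under that assumed definition:

```python
from collections import Counter

def generality(citing_classes):
    """One minus the Herfindahl index of the technology classes of the
    patents citing a given patent: 0.0 when all citations come from a
    single class, approaching 1.0 as citations spread across classes."""
    counts = Counter(citing_classes)
    n = sum(counts.values())
    if n == 0:
        return 0.0  # no citations: treat as minimally general
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A patent cited evenly from four distinct classes is broadly general,
# while one cited only from within its own class is not.
print(generality(["A", "B", "C", "D"]))  # 0.75
print(generality(["A", "A", "A"]))       # 0.0
```

    The companion 'importance' measure is typically a citation-weighted citation count; it is omitted here for brevity.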

    Remote capacitive sensing in two-dimensional quantum-dot arrays

    Get PDF
    We investigate gate-defined quantum dots in silicon-on-insulator nanowire field-effect transistors fabricated using a foundry-compatible fully-depleted silicon-on-insulator (FD-SOI) process. A series of split gates wrapped over the silicon nanowire naturally produces a 2×n bilinear array of quantum dots along a single nanowire. We begin by studying the capacitive coupling of quantum dots within such a 2×2 array, and then show how such couplings can be extended across two parallel silicon nanowires coupled together by shared, electrically isolated, 'floating' electrodes. With one quantum dot operating as a single-electron-box sensor, the floating gate serves to extend the charge-sensing range, enabling it to detect charge state transitions in a separate silicon nanowire. By comparing measurements from multiple devices, we illustrate the impact of the floating gate by quantifying the decay of charge sensitivity as a function of dot-sensor separation and configuration within the dual-nanowire structure. Comment: 9 pages, 3 figures, 35 cites and supplementary material

    A Silicon Surface Code Architecture Resilient Against Leakage Errors

    Get PDF
    Spin qubits in silicon quantum dots are one of the most promising building blocks for large-scale quantum computers thanks to their high qubit density and compatibility with existing semiconductor technologies. High-fidelity single-qubit gates exceeding the threshold of error correction codes such as the surface code have been demonstrated, while two-qubit gates have reached 98% fidelity and are improving rapidly. However, there are other types of error, such as charge leakage and propagation, that may occur in quantum dot arrays and that cannot be corrected by quantum error correction codes, making them potentially damaging even when their probability is small. We propose a surface code architecture for silicon quantum dot spin qubits that is robust against leakage errors by incorporating multi-electron mediator dots. Charge leakage in the qubit dots is transferred to the mediator dots via charge relaxation processes and then removed using charge reservoirs attached to the mediators. A stabiliser-check cycle, optimised for our hardware, then removes the correlations between the residual physical errors. Through simulations we obtain the surface code threshold for charge leakage errors and show that in our architecture the damage due to charge leakage is reduced to a level similar to that of the usual depolarising gate noise. Spin leakage errors in our architecture are confined to the ancilla qubits and can be removed during quantum error correction via reinitialisation of the ancillae, which ensures the robustness of our architecture against spin leakage as well. Our use of elongated mediator dots creates space throughout the quantum dot array for charge reservoirs, measuring devices, and control gates, providing scalability in the design.

    Why compare marine ecosystems?

    Get PDF
    This paper is not subject to U.S. copyright. The definitive version was published in ICES Journal of Marine Science: Journal du Conseil 67 (2010): 1-9, doi:10.1093/icesjms/fsp221. Effective marine ecosystem-based management (EBM) requires understanding the key processes and relationships controlling biodiversity, productivity, and resilience to perturbations. Unfortunately, the scales, complexity, and non-linear dynamics that characterize marine ecosystems often confound managing for these properties. Nevertheless, scientifically derived decision-support tools (DSTs) are needed to account for impacts resulting from a variety of simultaneous human activities. Three possible methodologies for revealing the mechanisms necessary to develop DSTs for EBM are: (i) controlled experimentation, (ii) iterative programmes of observation and modelling ("learning by doing"), and (iii) comparative ecosystem analysis. Controlled experiments are limited in capturing the complexity necessary to develop models of marine ecosystem dynamics with sufficient realism at appropriate scales. Iterative programmes of observation, model building, and assessment are useful for specific ecosystem issues but rarely lead to generally transferable products. Comparative ecosystem analyses may be the most effective, building on the first two by inferring ecosystem processes from comparisons and contrasts of ecosystem responses to human-induced factors. We propose a hierarchical system of ecosystem comparisons comprising within-ecosystem comparisons (utilizing temporal and spatial changes in relation to human activities), within-ecosystem-type comparisons (e.g. coral reefs, temperate continental shelves, upwelling areas), and cross-ecosystem-type comparisons (e.g. coral reefs vs. boreal, terrestrial vs. marine ecosystems).
Such a hierarchical comparative approach should lead to better understanding of the processes controlling biodiversity, productivity, and the resilience of marine ecosystems. In turn, better understanding of these processes will lead to the development of increasingly general laws, hypotheses, functional forms, governing equations, and broad interpretations of ecosystem responses to human activities, ultimately improving DSTs in support of EBM.

    Changes in the size structure of marine fish communities

    Get PDF
    Marine ecosystems have been heavily impacted by fishing pressure, which can cause major changes in the structure of communities. Fishing directly removes biomass and causes secondary effects such as altered predatory and competitive interactions and altered energy pathways, all of which affect the functional groups and size distributions of marine ecosystems. We conducted a meta-analysis of eighteen trawl surveys from around the world to determine whether there have been consistent changes in size structure and life-history groups across ecosystems. Declining biomass trends for larger fish and invertebrates were present in nine systems, all in the North Atlantic, while seven ecosystems did not exhibit consistent declining trends in larger organisms. Two systems showed other patterns. Smaller taxa, across all ecosystems, had biomass trends that were typically flat or slightly increasing over time. Changes in the ratio of pelagic taxa to demersal taxa were variable across the surveys. Pelagic species were not uniformly increasing, but did show periods of increase in certain regions. In the western Atlantic, the pelagic-to-demersal ratio increased across a number of surveys in the 1990s and declined in the mid-2000s. The trawl survey data suggest there have been considerable structural changes over time and across regions, but the patterns are not consistent across all ecosystems.
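    The pelagic-to-demersal biomass ratio used in the comparison above is straightforward to compute from survey records; the record layout below is hypothetical, chosen only for illustration:

```python
from collections import defaultdict

def pelagic_demersal_ratio(records):
    """records: (year, group, biomass) tuples, where group is either
    'pelagic' or 'demersal'. Returns {year: pelagic/demersal biomass}."""
    totals = defaultdict(lambda: {"pelagic": 0.0, "demersal": 0.0})
    for year, group, biomass in records:
        totals[year][group] += biomass
    return {y: t["pelagic"] / t["demersal"] for y, t in totals.items()}

# Toy survey: the ratio rises between the two decades.
survey = [(1995, "pelagic", 40.0), (1995, "demersal", 80.0),
          (2005, "pelagic", 60.0), (2005, "demersal", 50.0)]
print(pelagic_demersal_ratio(survey))  # {1995: 0.5, 2005: 1.2}
```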

    Toward cyberinfrastructure to facilitate collaboration and reproducibility for marine integrated ecosystem assessments

    Get PDF
    Author Posting. © The Author(s), 2016. This is the author's version of the work. It is posted here under a nonexclusive, irrevocable, paid-up, worldwide license granted to WHOI. It is made available for personal use, not for redistribution. The definitive version was published in Earth Science Informatics 10 (2017): 85-97, doi:10.1007/s12145-016-0280-4. There is a growing need for cyberinfrastructure to support science-based decision making in the management of natural resources. In particular, our motivation was to aid the development of cyberinfrastructure for Integrated Ecosystem Assessments (IEAs) for marine ecosystems. The IEA process involves analysis of natural and socio-economic information based on diverse and disparate sources of data, requiring collaboration among scientists of many disciplines and communication with other stakeholders. Here we describe our bottom-up approach to developing cyberinfrastructure through a collaborative process engaging a small group of domain scientists, computer scientists, and software engineers. We report on a use case evaluated for an Ecosystem Status Report, a multi-disciplinary report inclusive of Earth, life, and social sciences, for the Northeast U.S. Continental Shelf Large Marine Ecosystem. Ultimately, we focused on sharing workflows as a component of the cyberinfrastructure to facilitate collaboration and reproducibility. We developed and deployed a software environment to generate a portion of the Report, retaining traceability of derived datasets including indicators of climate forcing, physical pressures, and ecosystem states. Our solution for sharing workflows and delivering reproducible documents includes IPython (now Jupyter) Notebooks. We describe the technical and social challenges that we encountered in the use case and the importance of training to aid the adoption of best practices and new technologies by domain scientists.
We consider the larger challenges of developing end-to-end cyberinfrastructure that engages other participants and stakeholders in the IEA process. Support for this research was provided by the U.S. National Science Foundation #0955649, with additional support to SB by the Investment in Science Fund at Woods Hole Oceanographic Institution.

    Bayesian Hierarchical Regression on Clearance Rates in the Presence of Lag and Tail Phases with an Application to Malaria Parasites

    Get PDF
    We present a principled technique for estimating the effect of covariates on malaria parasite clearance rates in the presence of “lag” and “tail” phases through the use of a Bayesian hierarchical linear model. The hierarchical approach enables us to appropriately incorporate the uncertainty in both estimating clearance rates in patients and assessing the potential impact of covariates on these rates into the posterior intervals generated for the parameters associated with each covariate. Furthermore, it permits us to incorporate information about individuals for whom there exists only one observation time before censoring, which alleviates a systematic bias affecting inference when these individuals are excluded. We use a changepoint model to account for both lag and tail phases, and hence base our estimation of the parasite clearance rate only on observations within the decay phase. The Bayesian approach allows us to treat the boundaries between the lag, decay, and tail phases within an individual's clearance profile as random variables themselves, thus accounting for the additional uncertainty in the delineation of phases. We compare our method to existing methodology used in the antimalarial research community through a simulation study and show that it possesses desirable frequentist properties for conducting inference. We use our methodology to measure the impact of several covariates on Plasmodium falciparum clearance rate data collected in 2009 and 2010. Though our method was developed with this application in mind, it can be easily applied to any biological system exhibiting these hindrances to estimation.
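    The full Bayesian hierarchical model is beyond a short sketch, but the changepoint structure it rests on (flat lag phase, log-linear decay, flat tail) can be illustrated with a simple least-squares grid search over the two phase boundaries; unlike the paper's method, this yields point estimates rather than posteriors:

```python
def _ols(xs, ys):
    # ordinary least squares slope/intercept for the decay segment
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return slope, my - slope * mx

def fit_clearance(times, log_counts):
    """Grid-search a piecewise model on log parasite counts: flat lag,
    log-linear decay, flat tail. Returns (lag_end_time, tail_start_time,
    clearance_rate) minimising squared error over all changepoint pairs."""
    n, best = len(times), None
    for i in range(n - 1):            # first point of the decay phase
        for j in range(i + 1, n):     # last point of the decay phase
            slope, icept = _ols(times[i:j + 1], log_counts[i:j + 1])
            sse = 0.0
            for k in range(n):
                t = times[min(max(k, i), j)]  # clamp lag/tail to phase ends
                sse += (log_counts[k] - (icept + slope * t)) ** 2
            if best is None or sse < best[0]:
                best = (sse, times[i], times[j], -slope)
    return best[1], best[2], best[3]

# Synthetic profile: lag until t=2, decay at rate 1 per unit time, tail from t=6.
times = list(range(9))
logc = [10, 10, 10, 9, 8, 7, 6, 6, 6]
print(fit_clearance(times, logc))  # (2, 6, 1.0)
```

    Basing the clearance rate only on the fitted decay segment mirrors the abstract's point that lag and tail observations must be excluded from the slope estimate.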

    The SAMI Galaxy Survey: Shocks and Outflows in a normal star-forming galaxy

    Full text link
    We demonstrate the feasibility and potential of using large integral field spectroscopic surveys to investigate the prevalence of galactic-scale outflows in the local Universe. Using integral field data from SAMI and the Wide Field Spectrograph, we study the nature of an isolated disk galaxy, SDSS J090005.05+000446.7 (z = 0.05386). In the integral field datasets, the galaxy presents skewed line profiles that change with position in the galaxy. The skewed line profiles are caused by different kinematic components overlapping along the line of sight. We perform spectral decomposition to separate the line profile in each spatial pixel into combinations of (1) a narrow kinematic component consistent with HII regions, (2) a broad kinematic component consistent with shock excitation, and (3) an intermediate component consistent with a mix of shock excitation and photoionisation. The three kinematic components have distinctly different velocity fields, velocity dispersions, line ratios, and electron densities. We model the line ratios, velocity dispersions, and electron densities with our MAPPINGS IV shock and photoionisation models, and we reach remarkable agreement between the data and the models. The models demonstrate that the different emission line properties are caused by major galactic outflows that introduce shock excitation in addition to photoionisation by star-forming activity. Interstellar shocks embedded in the outflows shock-excite and compress the gas, causing the elevated line ratios, velocity dispersions, and electron densities observed in the broad kinematic component. We argue from energy considerations that, in the absence of a powerful active galactic nucleus, the outflows are likely to be driven by starburst activity. Our results set a benchmark for the type of analysis that can be achieved by the SAMI Galaxy Survey on large numbers of galaxies. Comment: 17 pages, 15 figures. Accepted to MNRAS. References updated.
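    The per-pixel spectral decomposition described above can be sketched, in simplified form, as a multi-Gaussian fit to the line profile. Here two components (narrow and broad) are recovered from a synthetic skewed profile with scipy; the paper itself fits three components with physically motivated constraints:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((v - mu) / sigma) ** 2)

def two_component(v, a1, mu1, s1, a2, mu2, s2):
    # narrow (HII-region-like) plus broad (shock-like) component
    return gauss(v, a1, mu1, s1) + gauss(v, a2, mu2, s2)

# Synthetic skewed profile: narrow core plus a broad, blueshifted wing.
v = np.linspace(-500.0, 500.0, 201)        # velocity axis, km/s
rng = np.random.default_rng(0)
profile = two_component(v, 1.0, 0.0, 40.0, 0.3, -80.0, 150.0)
profile += rng.normal(0.0, 0.005, v.size)  # weak noise

p0 = [1.0, 0.0, 50.0, 0.2, -50.0, 120.0]   # rough initial guesses
popt, _ = curve_fit(two_component, v, profile, p0=p0)
sigmas = sorted(abs(s) for s in (popt[2], popt[5]))
print(sigmas)  # recovered narrow (~40 km/s) and broad (~150 km/s) dispersions
```

    The velocity offset between the two fitted centroids is what produces the skewed single-line appearance in the summed profile.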

    The practical implications of using standardized estimation equations in calculating the prevalence of chronic kidney disease

    Get PDF
    BACKGROUND: Kidney Disease Outcomes Quality Initiative (KDOQI) chronic kidney disease (CKD) guidelines have focused on the utility of the modified four-variable MDRD equation (now traceable by isotope dilution mass spectrometry, IDMS) in calculating estimated glomerular filtration rates (eGFRs). This study assesses the practical implications of eGFR correction equations for the range of creatinine assays currently used in the UK and further investigates the effect of these equations on the calculated prevalence of CKD in one UK region. METHODS: Using simulation, a range of creatinine data (30-300 micromol/l) was generated for male and female patients aged 20-100 years. The maximum differences between the IDMS and MDRD equations for all 14 UK laboratory techniques for serum creatinine measurement were explored, with averages of individual eGFRs calculated according to MDRD and IDMS below 60 ml/min/1.73 m(2) and 30 ml/min/1.73 m(2). Similar procedures were applied to 712,540 samples from patients aged 18 years or over (reflecting the five methods for serum creatinine measurement utilized in Northern Ireland) to explore, graphically, maximum differences between assays. CKD prevalence using both estimation equations was compared using an existing cohort of observed data. RESULTS: Simulated data indicate that the majority of laboratories in the UK have small differences between the IDMS and MDRD methods of eGFR measurement for stages 4 and 5 CKD (where the averaged maximum difference for all laboratory methods was 1.27 ml/min/1.73 m(2) for females and 1.59 ml/min/1.73 m(2) for males). MDRD deviated furthest from the IDMS results for the Endpoint Jaffe method: the maximum difference of 9.93 ml/min/1.73 m(2) for females and 5.42 ml/min/1.73 m(2) for males occurred at extreme ages and in those with eGFR > 30 ml/min/1.73 m(2). Observed data for 93,870 patients yielded a first MDRD eGFR < 60 ml/min/1.73 m(2) in 2001.
66,429 (71%) had a second test more than 3 months later, of which 47,093 (71%) continued to have an eGFR < 60 ml/min/1.73 m(2). Estimated crude prevalence of laboratory-detected CKD in adults was 3.97% using the MDRD equation, falling to 3.69% when applying the IDMS equation. Over 95% of this difference in prevalence was explained by older females with stage 3 CKD (eGFR 30-59 ml/min/1.73 m(2)) close to the interface with stage 2 CKD (eGFR 60-90 ml/min/1.73 m(2)). CONCLUSIONS: Improved accuracy of eGFR is obtainable by using the IDMS correction, especially in the earlier stages of CKD (stages 1-3). Our data indicate that this improved accuracy could lead to reduced prevalence estimates and potentially a decreased likelihood of onward referral to nephrology services, particularly in older females.
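For reference, the two estimation equations compared above differ only in their leading constant: the published four-variable MDRD equation uses 186, while the IDMS-traceable re-expression uses 175 (both with creatinine in mg/dl; laboratories reporting micromol/l divide by 88.4). A sketch of the standard published formulas, omitting the ethnicity factor for brevity:

```python
def mdrd_egfr(scr_umol_l, age, female, constant=186.0):
    """Four-variable MDRD eGFR in ml/min/1.73 m(2).
    constant=186 gives the original MDRD equation; constant=175 gives the
    IDMS-traceable re-expression. Creatinine is converted from micromol/l
    to mg/dl. Ethnicity adjustment factor omitted for brevity."""
    scr_mg_dl = scr_umol_l / 88.4
    egfr = constant * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    return egfr

# An older female with creatinine 120 micromol/l sits in stage 3 under
# both equations, but the IDMS estimate is uniformly ~6% lower.
mdrd = mdrd_egfr(120.0, 70, female=True)
idms = mdrd_egfr(120.0, 70, female=True, constant=175.0)
print(round(mdrd, 1), round(idms, 1))
```

Because the equations differ only by the multiplier 175/186 (about 0.94), IDMS estimates are systematically lower, which is consistent with the reduced prevalence reported above: borderline stage 3 patients near the 60 ml/min/1.73 m(2) threshold are reclassified.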